Scheduled competition learning based multi-objective particle swarm optimization algorithm
LIU Ming, DONG Minggang, JING Chao
Journal of Computer Applications    2019, 39 (2): 330-335.   DOI: 10.11772/j.issn.1001-9081.2018061201
To improve population diversity and algorithm convergence, a Scheduled competition learning based Multi-Objective Particle Swarm Optimization (SMOPSO) algorithm was proposed. The multi-objective particle swarm optimization algorithm was combined with a competition learning mechanism, which was applied every fixed number of iterations to maintain population diversity. Meanwhile, to improve convergence without maintaining a global-best external archive, elite particles were selected from the current swarm, and a global best particle was then chosen at random from these elites. The performance of the proposed algorithm was verified on 21 benchmarks and compared with 8 algorithms, including Multi-objective Particle Swarm Optimization algorithm based on Decomposition (MPSOD), Competitive Mechanism based multi-Objective Particle Swarm Optimizer (CMOPSO) and Reference Vector guided Evolutionary Algorithm (RVEA). The experimental results show that the proposed algorithm obtains a more uniform Pareto front and a smaller Inverted Generational Distance (IGD).
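The two mechanisms described above can be illustrated with a minimal sketch. This is not the paper's exact formulation: a scalar fitness surrogate stands in for the multi-objective ranking, and the function names, the pairing scheme, and the `lr` and `elite_frac` parameters are assumptions for illustration only.

```python
import random

def competition_learning(swarm, velocities, fitness, lr=0.5):
    """One round of pairwise competition learning (hypothetical sketch).

    Particles are randomly paired; in each pair the worse particle
    (the 'loser', larger fitness = worse here) moves toward the better
    one (the 'winner'), which helps maintain diversity.
    """
    idx = list(range(len(swarm)))
    random.shuffle(idx)
    for a, b in zip(idx[::2], idx[1::2]):
        winner, loser = (a, b) if fitness[a] < fitness[b] else (b, a)
        r1, r2 = random.random(), random.random()
        for d in range(len(swarm[loser])):
            velocities[loser][d] = (r1 * velocities[loser][d]
                                    + r2 * lr * (swarm[winner][d] - swarm[loser][d]))
            swarm[loser][d] += velocities[loser][d]
    return swarm, velocities

def elite_gbest(swarm, fitness, elite_frac=0.2):
    """Pick a global guide at random from the elite (best) particles,
    instead of maintaining a global-best external archive."""
    k = max(1, int(len(swarm) * elite_frac))
    elite = sorted(range(len(swarm)), key=lambda i: fitness[i])[:k]
    return swarm[random.choice(elite)]
```

In a full SMOPSO loop, `competition_learning` would only run on the scheduled iterations, with the ordinary PSO update used in between.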
Multi-objective differential evolution algorithm with improved ranking-based mutation
LIU Bao, DONG Minggang, JING Chao
Journal of Computer Applications    2018, 38 (8): 2157-2163.   DOI: 10.11772/j.issn.1001-9081.2018010260
Focusing on the slow convergence and poor uniformity of multi-objective differential evolution algorithms on multi-objective optimization problems, a Multi-Objective Differential Evolution algorithm with Improved Ranking-based Mutation (MODE-IRM) was proposed. The optimal individual was used as the base vector in the mutation, which accelerated the ranking-based mutation operator. In addition, an opposition-based parameter adjustment strategy was adopted to dynamically tune parameter values in different optimization stages, further accelerating convergence. Finally, an improved crowding-distance formula was introduced into the sorting operation, which improved the uniformity of the solutions. Simulation experiments were conducted on the standard multi-objective optimization problems ZDT1-ZDT4, ZDT6 and DTLZ6-DTLZ7. The overall performance of MODE-IRM was significantly better than that of MODE-RMO and three other PlatEMO algorithms: MOEA/D-DE (Multiobjective Evolutionary Algorithm based on Decomposition with Differential Evolution), RM-MEDA (Regularity Model-based Multi-objective Estimation of Distribution Algorithm) and IM-MOEA (Inverse Modeling Multi-objective Evolutionary Algorithm). Moreover, in terms of the performance metrics GD (Generational Distance), IGD (Inverted Generational Distance) and SP (Spacing), the mean and variance of MODE-IRM on all problems were significantly smaller than those of MODE-RMO. The simulation results show that MODE-IRM has better convergence and uniformity.
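The mutation change described above, with the best individual as the base vector and rank-proportional selection of the difference vectors, can be sketched as follows. This is a hypothetical single-objective illustration (a scalar fitness replaces the multi-objective ranking), and the function name, the rank-to-probability mapping, and the parameter `F` are assumptions rather than the paper's exact operator.

```python
import random

def ranked_mutation(pop, fitness, F=0.5):
    """Improved ranking-based mutation (hypothetical sketch).

    Individuals are ranked by fitness (smaller = better); difference
    vectors r1, r2 are accepted with probability proportional to rank,
    so better individuals are picked more often. The current best
    individual serves as the base vector for every mutant.
    """
    n = len(pop)
    order = sorted(range(n), key=lambda i: fitness[i])   # best first
    rank = {i: n - pos for pos, i in enumerate(order)}   # best gets rank n
    best = order[0]

    def pick():
        # Rank-proportional acceptance: candidate i survives with prob rank/n.
        while True:
            i = random.randrange(n)
            if random.random() < rank[i] / n:
                return i

    mutants = []
    for i in range(n):
        r1 = pick()
        r2 = pick()
        while r2 == r1:
            r2 = pick()
        mutants.append([pop[best][d] + F * (pop[r1][d] - pop[r2][d])
                        for d in range(len(pop[i]))])
    return mutants
```

Using the best individual as the base vector biases every mutant toward the current optimum, which is the source of the faster convergence claimed above.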
Genetic instance selection algorithm for K-nearest neighbor classifier
HUANG Yuyang, DONG Minggang, JING Chao
Journal of Computer Applications    2018, 38 (11): 3112-3118.   DOI: 10.11772/j.issn.1001-9081.2018041337
Traditional instance selection algorithms may mistakenly remove non-noise samples and suffer from low efficiency. To address this issue, a genetic instance selection algorithm for the K-Nearest Neighbor (KNN) classifier was proposed. The algorithm used a two-stage selection mechanism based on a decision tree and a genetic algorithm. Firstly, the decision tree was used to determine the range of noise samples. Then, the genetic algorithm was used to remove the noise samples within this range precisely, which effectively reduced the risk of mistaken removal and improved efficiency. Secondly, a 1NN-based validation-set selection strategy was proposed to improve the instance selection accuracy of the genetic algorithm. Finally, an MSE (Mean Squared Error)-based objective function was used as the fitness function, which improved the effectiveness and stability of the algorithm. Compared with PRe-classification based KNN (PR KNN), Instance and Feature Selection based on Cooperative Coevolution (IFS-CoCo) and K-Nearest Neighbors (KNN), the improvement in classification accuracy is 0.07 to 26.9 percentage points, 0.03 to 11.8 percentage points and 0.2 to 12.64 percentage points respectively, and the improvement in AUC (Area Under Curve) and Kappa is 0.25 to 18.32 percentage points, 1.27 to 23.29 percentage points, and 0.04 to 12.82 percentage points respectively. The experimental results show that the proposed method has advantages in classification accuracy and efficiency.
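The second (genetic) stage of the two-stage mechanism can be sketched as below. This is a simplified illustration under stated assumptions: the first-stage decision tree is assumed to have already produced `candidates` (indices of suspected noise samples), the GA operators are a generic elitist GA rather than the paper's exact configuration, and a plain 1-NN validation accuracy stands in for the MSE-based fitness; all names are hypothetical.

```python
import random

def one_nn_accuracy(train, val):
    """Accuracy of a 1-NN classifier built on `train`, scored on `val`.
    Each sample is (feature_tuple, label)."""
    correct = 0
    for xv, yv in val:
        nearest = min(train, key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], xv)))
        correct += nearest[1] == yv
    return correct / len(val)

def ga_select(data, candidates, val, generations=20, pop_size=10):
    """GA stage of the two-stage selection (hypothetical sketch).

    A binary mask over `candidates` decides which suspected-noise
    samples to keep; all non-candidate samples are always kept, so the
    GA only searches the small region the decision tree flagged.
    """
    cand = set(candidates)
    fixed = [data[i] for i in range(len(data)) if i not in cand]

    def fitness(mask):
        kept = fixed + [data[i] for i, keep in zip(candidates, mask) if keep]
        return one_nn_accuracy(kept, val) if kept else 0.0

    pop = [[random.randint(0, 1) for _ in candidates] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # elitist: best first
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(candidates)) if len(candidates) > 1 else 0
            child = p1[:cut] + p2[cut:]              # one-point crossover
            if random.random() < 0.1:                # bit-flip mutation
                j = random.randrange(len(candidates))
                child[j] ^= 1
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [i for i, keep in zip(candidates, best) if keep]
```

Because the chromosome covers only the flagged candidates instead of the whole training set, the search space shrinks and clean samples outside the flagged range can never be removed, which is the efficiency and safety argument made above.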
Improved particle swarm optimization algorithm based on Gaussian disturbance and natural selection
AI Bing, DONG Minggang
Journal of Computer Applications    2016, 36 (3): 687-691.   DOI: 10.11772/j.issn.1001-9081.2016.03.687
To effectively balance the global and local search performance of the Particle Swarm Optimization (PSO) algorithm, an improved PSO algorithm based on Gaussian disturbance and natural selection (GDNSPSO) was proposed. Building on the basic PSO algorithm, the improved algorithm took into account the mutual influence among all individual best particles and replaced each particle's individual best value with the mean of all individual bests plus a Gaussian disturbance. The survival-of-the-fittest evolution mechanism of natural selection was employed to improve the performance of the algorithm. At the same time, the inertia weight was adjusted nonlinearly by a cosine function with an adaptively adjusted threshold, and an asynchronously changing adjustment strategy was used to improve the learning ability of the particles. The simulation results show that the GDNSPSO algorithm improves convergence speed and precision, and outperforms several recently proposed improved PSO algorithms.
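The disturbed-mean update, cosine inertia schedule, and natural-selection step can be sketched as follows. This is a minimal illustration, not the paper's exact formulas: the function names, the noise scale `sigma`, and the fixed acceleration coefficients are assumptions, and the adaptive threshold and asynchronous coefficient adjustment described above are omitted for brevity.

```python
import math
import random

def gdnspso_step(swarm, vel, pbest, gbest, t, t_max,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, sigma=0.1):
    """One GDNSPSO iteration (hypothetical sketch).

    Each particle's personal best is replaced by the mean of all
    personal bests plus a Gaussian disturbance; the inertia weight
    follows a cosine schedule from w_max (t=0) down to w_min (t=t_max).
    """
    dim = len(swarm[0])
    w = w_min + (w_max - w_min) * (1 + math.cos(math.pi * t / t_max)) / 2
    mean_pbest = [sum(p[d] for p in pbest) / len(pbest) for d in range(dim)]
    for i, x in enumerate(swarm):
        for d in range(dim):
            disturbed = mean_pbest[d] + random.gauss(0.0, sigma)
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (disturbed - x[d])
                         + c2 * random.random() * (gbest[d] - x[d]))
            x[d] += vel[i][d]
    return swarm, vel

def natural_selection(swarm, vel, fitness):
    """Survival of the fittest: copy the better half of the swarm
    (positions and velocities) over the worse half."""
    order = sorted(range(len(swarm)), key=lambda i: fitness[i])  # best first
    half = len(swarm) // 2
    for worse, better in zip(order[half:], order[:half]):
        swarm[worse] = swarm[better][:]
        vel[worse] = vel[better][:]
    return swarm, vel
```

Pulling every particle toward a shared, noise-perturbed mean of the personal bests couples the individuals (the "mutual influence" above), while the Gaussian term keeps that shared attractor from collapsing diversity too early.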